References
[1] J. Faber and L. M. Fonseca, “How sample size influences research outcomes.,” Dental Press J. Orthod., vol. 19, no. 4, pp. 27–9, 2014.
[2] R. Nambiar, R. Bhardwaj, A. Sethi, and R. Vargheese, “A look at challenges and opportunities of Big Data analytics in healthcare,” in 2013 IEEE International Conference on Big Data, 2013, pp. 17–22.
[3] A. Oussous, F.-Z. Benjelloun, A. Ait Lahcen, and S. Belfkih, “Big Data technologies: A survey,” J. King Saud Univ. - Comput. Inf. Sci., vol. 30, no. 4, pp. 431–448, Oct. 2018.
[4] S. D. Halpern, J. H. T. Karlawish, and J. A. Berlin, “The continuing unethical conduct of underpowered clinical trials.,” JAMA, vol. 288, no. 3, pp. 358–62, Jul. 2002.
[5] K. S. Button et al., “Power failure: why small sample size undermines the reliability of neuroscience,” Nat. Rev. Neurosci., vol. 14, no. 5, pp. 365–376, May 2013.
[6] R. Nambiar, R. Bhardwaj, A. Sethi, and R. Vargheese, “A look at challenges and opportunities of Big Data analytics in healthcare,” in 2013 IEEE International Conference on Big Data, 2013, pp. 17–22.
[7] P. Buneman, “Semistructured data,” Symp. Princ. Database Syst., 1997.
[8] J. Gantz and D. Reinsel, “Extracting Value from Chaos,” IDC iView, sponsored by EMC Corporation, 2011.
[9] E. Mayo-Wilson, T. Li, N. Fusco, and K. Dickersin, “Practical guidance for using multiple data sources in systematic reviews and meta-analyses (with examples from the MUDS study),” Res. Synth. Methods, vol. 9, no. 1, pp. 2–12, Mar. 2018.
[10] A. Gandomi and M. Haider, “Beyond the hype: Big data concepts, methods, and analytics,” Int. J. Inf. Manage., vol. 35, no. 2, pp. 137–144, Apr. 2015.
[11] W. M. Mason, “Statistical Analysis: Multilevel Methods,” Int. Encycl. Soc. Behav. Sci., pp. 381–386, Jan. 2015.
[12] R. Sambasivan, S. Das, and S. K. Sahu, “A Bayesian Perspective of Statistical Machine Learning for Big Data.”
[13] S. Velupillai et al., “Using clinical Natural Language Processing for health outcomes research: Overview and actionable suggestions for future advances,” J. Biomed. Inform., vol. 88, pp. 11–19, Dec. 2018.
[14] F. Jiang et al., “Artificial intelligence in healthcare: past, present and future,” Stroke Vasc. Neurol., vol. 2, no. 4, pp. 230–243, Dec. 2017.
[15] Daniyal, W.-J. Wang, M.-C. Su, S.-H. Lee, C.-S. Hung, and C.-C. Chen, “A guideline to determine the training sample size when applying big data mining methods in clinical decision making,” in 2018 IEEE International Conference on Applied System Invention (ICASI), 2018, pp. 678–681.
[16] T. Zhang and B. Yang, “Dimension reduction for big data,” 2018.
[17] A. Farhangfar, L. A. Kurgan, and W. Pedrycz, “A Novel Framework for Imputation of Missing Values in Databases,” IEEE Trans. Syst. Man, Cybern. - Part A Syst. Humans, vol. 37, no. 5, pp. 692–709, Sep. 2007.
[18] J. Luengo, S. García, and F. Herrera, “On the choice of the best imputation methods for missing values considering three groups of classification methods,” Knowl. Inf. Syst., vol. 32, no. 1, pp. 77–108, Jul. 2012.
[19] X. Zhu, S. Zhang, Z. Jin, Z. Zhang, and Z. Xu, “Missing Value Estimation for Mixed-Attribute Data Sets,” IEEE Trans. Knowl. Data Eng., vol. 23, no. 1, pp. 110–121, Jan. 2011.
[20] J. Tian, B. Yu, D. Yu, and S. Ma, “Clustering-based multiple imputation via gray relational analysis for missing data and its application to aerospace field.,” ScientificWorldJournal., vol. 2013, p. 720392, May 2013.
[21] B. Twala, M. Cartwright, and M. Shepperd, “Comparison of various methods for handling incomplete data in software engineering databases,” in 2005 International Symposium on Empirical Software Engineering, 2005., pp. 102–111.
[22] S. Thirukumaran and A. Sumathi, “Missing value imputation techniques depth survey and an imputation Algorithm to improve the efficiency of imputation,” in 2012 Fourth International Conference on Advanced Computing (ICoAC), 2012, pp. 1–5.
[23] F. Keller, E. Muller, and K. Bohm, “HiCS: High Contrast Subspaces for Density-Based Outlier Ranking,” in 2012 IEEE 28th International Conference on Data Engineering, 2012, pp. 1037–1048.
[24] V. J. Hodge and J. Austin, “A Survey of Outlier Detection Methodologies,” Artif. Intell. Rev., vol. 22, no. 2, pp. 85–126, Oct. 2004.
[25] S. Chawla and A. Gionis, “k-means–: A unified approach to clustering and outlier detection,” in Proceedings of the 2013 SIAM International Conference on Data Mining, 2013, pp. 189–197.
[26] H. V. Nguyen, E. Müller, J. Vreeken, F. Keller, and K. Böhm, “CMI: An Information-Theoretic Contrast Measure for Enhancing Subspace Cluster and Outlier Detection,” in Proceedings of the 2013 SIAM International Conference on Data Mining, 2013, pp. 198–206.
[27] S. Wang and X. Yao, “Multiclass Imbalance Problems: Analysis and Potential Solutions,” IEEE Trans. Syst. Man, Cybern. Part B, vol. 42, no. 4, pp. 1119–1130, Aug. 2012.
[28] H. Ma, L. Wang, and B. Shen, “A new fuzzy support vector machines for class imbalance learning,” in 2011 International Conference on Electrical and Control Engineering, 2011, pp. 3781–3784.
[29] N. V. Chawla, N. Japkowicz, and A. Kotcz, “Editorial,” ACM SIGKDD Explor. Newsl., vol. 6, no. 1, p. 1, Jun. 2004.
[30] C. Seiffert, T. M. Khoshgoftaar, J. Van Hulse, and A. Napolitano, “A Comparative Study of Data Sampling and Cost Sensitive Learning,” in 2008 IEEE International Conference on Data Mining Workshops, 2008, pp. 46–52.
[31] H. Chen and Z. Yan, “Security and Privacy in Big Data Lifetime: A Review,” Springer, Cham, 2016, pp. 3–15.
[32] R. Bao, Z. Chen, and M. S. Obaidat, “Challenges and techniques in Big data security and privacy: A review,” Secur. Priv., vol. 1, no. 4, p. e13, Jul. 2018.
[33] “Big Data for Sustainable Development | United Nations.” [Online]. Available: http://www.un.org/en/sections/issues-depth/big-data-sustainable-development/index.html. [Accessed: 24-Mar-2019].
[34] B. K. Nayak, “Understanding the relevance of sample size calculation.,” Indian J. Ophthalmol., vol. 58, no. 6, pp. 469–70, 2010.
[35] K. K. Dobbin, Y. Zhao, and R. M. Simon, “How Large a Training Set is Needed to Develop a Classifier for Microarray Data?,” Clin. Cancer Res., vol. 14, no. 1, pp. 108–114, Jan. 2008.
[36] R. L. Figueroa, Q. Zeng-Treitler, S. Kandula, and L. H. Ngo, “Predicting sample size required for classification performance.,” BMC Med. Inform. Decis. Mak., vol. 12, p. 8, Feb. 2012.
[37] D. Moher, C. S. Dulberg, and G. A. Wells, “Statistical power, sample size, and their reporting in randomized controlled trials,” JAMA, vol. 272, no. 2, pp. 122–4, Jul. 1994.
[38] E. DePoy, L. N. Gitlin, E. DePoy, and L. N. Gitlin, “Statistical Analysis for Experimental-Type Designs,” Introd. to Res., pp. 282–310, Jan. 2016.
[39] A. Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury, “Hypothesis testing, type I and type II errors.,” Ind. Psychiatry J., vol. 18, no. 2, pp. 127–31, Jul. 2009.
[40] D. R. Anderson and D. J. Sweeney, Statistics for Business and Economics. South-Western Cengage Learning, 2011.
[41] W. G. Cochran, Sampling Techniques. Wiley, 1977.
[42] G. Kalton, Introduction to Survey Sampling. Thousand Oaks, CA: SAGE Publications, Inc., 1983.
[43] M. H. Hansen, W. N. Hurwitz, and W. G. Madow, Sample Survey Methods and Theory. Wiley, 1953.
[44] G. D. Israel, “Sampling the Evidence of Extension Program Impact.”
[45] J. Cohen, Statistical power analysis for the behavioral sciences. L. Erlbaum Associates, 1988.
[46] G. D. Israel, “Determining Sample Size,” 1992.
[47] “Measures of Variability: Range, Interquartile Range, Variance, and Standard Deviation - Statistics By Jim.” [Online]. Available: https://statisticsbyjim.com/basics/variability-range-interquartile-variance-standard-deviation/. [Accessed: 21-Jun-2019].
[48] C. J. Ferguson, “Is psychological research really as good as medical research? Effect size comparisons between psychology and medicine.,” Rev. Gen. Psychol., vol. 13, no. 2, pp. 130–136, Jun. 2009.
[49] R. B. Kline, Beyond Significance Testing: Statistics Reform in the Behavioral Sciences. American Psychological Association, 2013.
[50] V. Bewick, L. Cheek, and J. Ball, “Statistics review 11: assessing risk.,” Crit. Care, vol. 8, no. 4, pp. 287–91, Aug. 2004.
[51] NCSS, “PASS Sample Size Software Standard Deviation Estimator.”
[52] “Population and sample standard deviation review (article) | KhanAcademy.” [Online]. Available: https://www.khanacademy.org/math/statistics-probability/summarizing-quantitative-data/variance-standard-deviation-sample/a/population-and-sample-standard-deviation-review. [Accessed: 28-Mar-2019].
[53] F. Weytens, O. Luminet, L. L. Verhofstadt, and M. Mikolajczak, “An Integrative Theory-Driven Positive Emotion Regulation Intervention,” PLoS One, vol. 9, no. 4, p. e95677, Apr. 2014.
[54] J. M. Fuster, M. Bodner, and J. K. Kroger, “Cross-modal and cross-temporal association in neurons of frontal cortex,” Nature, vol. 405, no. 6784, pp. 347–351, May 2000.
[55] R. M. Hansen and A. B. Fulton, “Background adaptation in children with a history of mild retinopathy of prematurity.,” Invest. Ophthalmol. Vis. Sci., vol. 41, no. 1, pp. 320–4, Jan. 2000.
[56] T. C. A. Freeman and T. A. Fowler, “Unequal retinal and extra-retinal motion signals produce different perceived slants of moving surfaces,” Vision Res., vol. 40, no. 14, pp. 1857–1868, Jun. 2000.
[57] M. Laubach, J. Wessberg, and M. A. L. Nicolelis, “Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task,” Nature, vol. 405, no. 6786, pp. 567–571, Jun. 2000.
[58] C. Braun, R. Schweizer, T. Elbert, N. Birbaumer, and E. Taub, “Differential activation in somatosensory cortex for different discrimination tasks.,” J. Neurosci., vol. 20, no. 1, pp. 446–50, Jan. 2000.
[59] M. G. Bloj, D. Kersten, and A. C. Hurlbert, “Perception of three-dimensional shape influences colour perception through mutual illumination,” Nature, vol. 402, no. 6764, pp. 877–879, Dec. 1999.
[60] F. Bonato and J. Cataliotti, “The effects of figure/ground, perceived area, and target saliency on the luminosity threshold.,” Percept. Psychophys., vol. 62, no. 2, pp. 341–9, Feb. 2000.
[61] C.-C. Chen, S.-H. Lee, W.-J. Wang, Y.-C. Lin, and M.-C. Su, “EEG-based motor network biomarkers for identifying target patients with stroke for upper limb rehabilitation and its construct validity,” PLoS One, vol. 12, no. 6, p. e0178822, Jun. 2017.
[62] D. G. Altman, “Statistics and ethics in medical research: III How large a sample?,” Br. Med. J., vol. 281, no. 6251, pp. 1336–8, Nov. 1980.
[63] J. P. Ioannidis, A. B. Haidich, and J. Lau, “Any casualties in the clash of randomised and observational evidence?,” BMJ, vol. 322, no. 7291, pp. 879–80, Apr. 2001.
[64] H. M. Colhoun, P. M. McKeigue, and G. Davey Smith, “Problems of reporting genetic associations with complex outcomes.,” Lancet (London, England), vol. 361, no. 9360, pp. 865–72, Mar. 2003.
[65] J. P. A. Ioannidis, “Genetic associations: false or true?,” Trends Mol. Med., vol. 9, no. 4, pp. 135–8, Apr. 2003.
[66] S. Wacholder, S. Chanock, M. Garcia-Closas, L. El ghormli, and N. Rothman, “Assessing the Probability That a Positive Report is False: An Approach for Molecular Epidemiology Studies,” JNCI J. Natl. Cancer Inst., vol. 96, no. 6, pp. 434–442, Mar. 2004.
[67] G. A. Barnard, “Must clinical trials be large? The interpretation of p-values and the combination of test results,” Stat. Med., vol. 9, no. 6, pp. 601–614, Jun. 1990.
[68] P. M. Fayers and D. Machin, “Sample size: how many patients are necessary?,” Br. J. Cancer, vol. 72, no. 1, pp. 1–9, Jul. 1995.
[69] A. Kagan and L. A. Shepp, “Why the variance?,” Stat. Probab. Lett., vol. 38, no. 4, pp. 329–333, Jul. 1998.
[70] L. A. Goodman, “On the Exact Variance of Products,” J. Am. Stat. Assoc., vol. 55, no. 292, p. 708, Dec. 1960.
[71] M. J. Salganik, “Variance estimation, design effects, and sample size calculations for respondent-driven sampling.,” J. Urban Health, vol. 83, no. 6 Suppl, pp. i98-112, Nov. 2006.
[72] D. Rajnarayan and D. Wolpert, “Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method,” Oct. 2008.
[73] “Population & Sample Variance: Definition, Formula & Examples - Video & Lesson Transcript | Study.com.” [Online]. Available: https://study.com/academy/lesson/population-sample-variance-definition-formula-examples.html. [Accessed: 21-Jun-2019].
[74] R. N. Forthofer, E. S. Lee, and M. Hernandez, Biostatistics: A Guide to Design, Analysis, and Discovery.
[75] S. G. Kwak and J. H. Kim, “Central limit theorem: the cornerstone of modern statistics,” Korean J. Anesthesiol., vol. 70, no. 2, p. 144, Apr. 2017.
[76] A. L. Blum and P. Langley, “Selection of relevant features and examples in machine learning,” Artif. Intell., vol. 97, no. 1–2, pp. 245–271, Dec. 1997.
[77] A. Goltsev and V. Gritsenko, “Investigation of efficient features for image recognition by neural networks,” Neural Networks, vol. 28, pp. 15–23, Apr. 2012.
[78] A. Khotanzad and Y. H. Hong, “Rotation invariant image recognition using features selected via a systematic method,” Pattern Recognit., vol. 23, no. 10, pp. 1089–1101, Jan. 1990.
[79] T. W. Rauber, F. de Assis Boldt, and F. M. Varejao, “Heterogeneous Feature Models and Feature Selection Applied to Bearing Fault Diagnosis,” IEEE Trans. Ind. Electron., vol. 62, no. 1, pp. 637–646, Jan. 2015.
[80] D. L. Swets and J. J. Weng, “Efficient content-based image retrieval using automatic feature selection,” in Proceedings of International Symposium on Computer Vision - ISCV, pp. 85–90.
[81] F. Amiri, M. Rezaei Yousefi, C. Lucas, A. Shakery, and N. Yazdani, “Mutual information-based feature selection for intrusion detection systems,” J. Netw. Comput. Appl., vol. 34, no. 4, pp. 1184–1199, Jul. 2011.
[82] G. Li, X. Hu, X. Shen, X. Chen, and Z. Li, “A novel unsupervised feature selection method for bioinformatics data sets through feature clustering,” in 2008 IEEE International Conference on Granular Computing, 2008, pp. 41–47.
[83] Q. Song, J. Ni, and G. Wang, “A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data,” IEEE Trans. Knowl. Data Eng., vol. 25, no. 1, pp. 1–14, Jan. 2013.
[84] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, “RCV1: A New Benchmark Collection for Text Categorization Research,” J. Mach. Learn. Res., vol. 5, pp. 361–397, 2004.
[85] Z. Zhao, F. Morstatter, S. Sharma, A. Anand, and H. Liu, “Advancing Feature Selection Research – ASU Feature Selection Repository.”
[86] P. Langley, “Selection of Relevant Features in Machine Learning,” 1994.
[87] P. Langley, Ed., Proceedings of the Seventeenth International Conference on Machine Learning (ICML-2000), June 29–July 2, 2000, Stanford University. Morgan Kaufmann Publishers, 2000.
[88] I. Kononenko, “Estimating attributes: Analysis and extensions of RELIEF,” Springer, Berlin, Heidelberg, 1994, pp. 171–182.
[89] L. Yu and H. Liu, “Efficient Feature Selection via Analysis of Relevance and Redundancy,” J. Mach. Learn. Res., vol. 5, no. Oct, pp. 1205–1224, 2004.
[90] D. Ienco and R. Meo, “Exploration and Reduction of the Feature Space by Hierarchical Clustering,” in Proceedings of the 2008 SIAM International Conference on Data Mining, 2008, pp. 577–587.
[91] D. M. Witten and R. Tibshirani, “A Framework for Feature Selection in Clustering,” J. Am. Stat. Assoc., vol. 105, no. 490, pp. 713–726, Jun. 2010.
[92] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene Selection for Cancer Classification using Support Vector Machines,” Mach. Learn., vol. 46, no. 1/3, pp. 389–422, 2002.
[93] K. Michalak and H. Kwaśnicka, “Correlation-based feature selection strategy in classification problems,” 2006.
[94] W. H. Hsu, “Genetic wrappers for feature selection in decision tree induction and variable ordering in Bayesian network structure learning,” Inf. Sci. (Ny)., vol. 163, no. 1–3, pp. 103–122, Jun. 2004.
[95] M. Dash, H. Liu, and J. Yao, “Dimensionality reduction of unsupervised data,” in Proceedings Ninth IEEE International Conference on Tools with Artificial Intelligence, pp. 532–539.
[96] N. Vandenbroucke, L. Macaire, and J.-G. Postaire, “Unsupervised color texture feature extraction and selection for soccer image segmentation,” in Proceedings 2000 International Conference on Image Processing (Cat. No.00CH37101), 2000, pp. 800–803 vol.2.
[97] M. Alibeigi, S. Hashemi, and A. Hamzeh, “Unsupervised Feature Selection Based on the Distribution of Features Attributed to Imbalanced Data Sets,” 2011.
[98] P. Mitra, C. A. Murthy, and S. K. Pal, “Unsupervised Feature Selection Using Feature Similarity,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 3, pp. 301–312, Mar. 2002.
[99] P.-Y. Zhou and K. C. C. Chan, “An unsupervised attribute clustering algorithm for unsupervised feature selection,” in 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2015, pp. 1–7.
[100] R. Agrawal et al., “Automatic subspace clustering of high dimensional data for data mining applications,” ACM SIGMOD Rec., vol. 27, no. 2, pp. 94–105, Jun. 1998.
[101] B. Mirkin, “Concept Learning and Feature Selection Based on Square-Error Clustering,” Mach. Learn., vol. 35, no. 1, pp. 25–39, 1999.
[102] “Feature Selection for Unsupervised Learning.” [Online]. Available: https://dl.acm.org/citation.cfm?id=1016787. [Accessed: 05-Apr-2019].
[103] J. Z. Huang et al., “Weighting Method for Feature Selection in K-Means,” pp. 209–226, Oct. 2007.
[104] G. Doquire and M. Verleysen, “A graph Laplacian based approach to semi-supervised feature selection for regression problems,” Neurocomputing, vol. 121, pp. 5–13, Dec. 2013.
[105] M. Yang, Y.-J. Chen, and G.-L. Ji, “Semi_Fisher Score: A semi-supervised method for feature selection,” in 2010 International Conference on Machine Learning and Cybernetics, 2010, pp. 527–532.
[106] K. Benabdeslem and M. Hindawi, “Constrained Laplacian Score for Semi-supervised Feature Selection,” Springer, Berlin, Heidelberg, 2011, pp. 204–218.
[107] B. Yegnanarayana, “Artificial neural networks for pattern recognition,” Sadhana, vol. 19, no. 2, pp. 189–238, Apr. 1994.
[108] K. Benabdeslem and M. Hindawi, “Efficient Semi-Supervised Feature Selection: Constraint, Relevance, and Redundancy,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 5, pp. 1131–1143, May 2014.
[109] Y. Wang, J. Wang, H. Liao, and H. Chen, “An efficient semi-supervised representatives feature selection algorithm based on information theory,” Pattern Recognit., vol. 61, pp. 511–523, Jan. 2017.
[110] R. Nambiar, R. Bhardwaj, A. Sethi, and R. Vargheese, “A look at challenges and opportunities of Big Data analytics in healthcare,” in 2013 IEEE International Conference on Big Data, 2013, pp. 17–22.
[111] D. T. Hau and E. W. Coiera, “Learning Qualitative Models of Dynamic Systems,” Mach. Learn., vol. 26, no. 2/3, pp. 177–211, 1997.
[112] K. M. Al-Aidaroo, A. A. Bakar, and Z. Othman, “Medical Data Classification with Naive Bayes Approach,” Inf. Technol. J., vol. 11, no. 9, pp. 1166–1174, Sep. 2012.
[113] M. Ringnér, “What is principal component analysis?,” Nat. Biotechnol., vol. 26, no. 3, pp. 303–304, Mar. 2008.
[114] T. Raykov and G. A. Marcoulides, “Population Proportion of Explained Variance in Principal Component Analysis: A Note on Its Evaluation Via a Large-Sample Approach,” Struct. Equ. Model. A Multidiscip. J., vol. 21, no. 4, pp. 588–595, Oct. 2014.
[115] P. Kadam and S. Bhalerao, “Sample size calculation.,” Int. J. Ayurveda Res., vol. 1, no. 1, pp. 55–7, Jan. 2010.
[116] P. B. Vaidya, B. S. R. Vaidya, and S. K. Vaidya, “Response to Ayurvedic therapy in the treatment of migraine without aura.,” Int. J. Ayurveda Res., vol. 1, no. 1, pp. 30–6, Jan. 2010.
[117] B. Röhrig, J.-B. du Prel, D. Wachtlin, R. Kwiecien, and M. Blettner, “Sample size calculation in clinical trials: part 13 of a series on evaluation of scientific publications.,” Dtsch. Arztebl. Int., vol. 107, no. 31–32, pp. 552–6, Aug. 2010.
[118] “Determining the sample size in a clinical trial. - Semantic Scholar.” [Online]. Available: https://www.semanticscholar.org/paper/Determining-the-sample-size-in-a-clinical-trial.-Kirby-Gebski/7faa0337887d7ab6b67a40424144168257af28fa#paper-header. [Accessed: 24-Jun-2019].
[119] P. Patra, “Sample size in clinical research, the number we need,” Int. J. Med. Sci. Public Heal., vol. 1, no. 1, pp. 5–10, Jul. 2012.
[120] Jaykaran, N. Kantharia, and P. Yadav, “Reporting of sample size and power in negative clinical trials published in Indian medical journals,” J. Pharm. Negat. Results, vol. 2, no. 2, p. 87, 2011.
[121] J. Cai and D. Zeng, “Sample Size/Power Calculation for Case-Cohort Studies,” Biometrics, vol. 60, no. 4, pp. 1015–1024, Dec. 2004.
[122] V. Kasiulevičius, “Sample size calculation in epidemiological studies,” 2006.
[123] M. Borenstein, H. Rothstein, and J. Cohen, Power and Precision: A Computer Program for Statistical Power Analysis and Confidence Intervals. Lawrence Erlbaum Associates, 1997.
[124] R. G. O’Brien, “UnifyPow: A SAS Macro for Sample-Size Analysis.”
[125] G. Welk, Physical Activity Assessments for Health-Related Research. Human Kinetics, 2002.
[126] “Sample Size Calculator - Confidence Level, Confidence Interval, Sample Size, Population Size, Relevant Population.” [Online]. Available: https://www.surveysystem.com/sscalce.htm. [Accessed: 27-Mar-2019].
[127] P. Bacchetti, C. E. McCulloch, and M. R. Segal, “Simple, Defensible Sample Sizes Based on Cost Efficiency,” Biometrics, vol. 64, no. 2, pp. 577–585, Jun. 2008.
[128] S. D. Halpern, J. H. T. Karlawish, and J. A. Berlin, “The continuing unethical conduct of underpowered clinical trials.,” JAMA, vol. 288, no. 3, pp. 358–62, Jul. 2002.
[129] H. C. Kraemer, J. Mintz, A. Noda, J. Tinklenberg, and J. A. Yesavage, “Caution Regarding the Use of Pilot Studies to Guide Power Calculations for Study Proposals,” Arch. Gen. Psychiatry, vol. 63, no. 5, p. 484, May 2006.
[130] D. G. Altman et al., “The Revised CONSORT Statement for Reporting Randomized Trials: Explanation and Elaboration,” Ann. Intern. Med., vol. 134, no. 8, p. 663, Apr. 2001.
[131] M. J. Gardner and D. G. Altman, “Confidence intervals rather than P values: estimation rather than hypothesis testing.,” BMJ, vol. 292, no. 6522, pp. 746–750, Mar. 1986.
[132] J. Ranstam, “Why the P-value culture is bad and confidence intervals a better alternative,” Osteoarthr. Cartil., vol. 20, no. 8, pp. 805–808, Aug. 2012.
[133] S. N. Goodman, “p Values, Hypothesis Tests, and Likelihood: Implications for Epidemiology of a Neglected Historical Debate,” Am. J. Epidemiol., vol. 137, no. 5, pp. 485–496, Mar. 1993.
[134] G. H. Guyatt, E. J. Mills, and D. Elbourne, “In the Era of Systematic Reviews, Does the Size of an Individual Trial Still Matter?,” PLoS Med., vol. 5, no. 1, p. e4, Jan. 2008.
[135] S. Edwards, R. Lilford, D. Braunholtz, and J. Jackson, “Why ‘underpowered’ trials are not necessarily unethical,” Lancet, vol. 350, no. 9080, pp. 804–807, Sep. 1997.
[136] S. Borra and A. Di Ciaccio, “Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods,” Comput. Stat. Data Anal., vol. 54, no. 12, pp. 2976–2989, Dec. 2010.
[137] G. Gong, “Cross-Validation, the Jackknife, and the Bootstrap: Excess Error Estimation in Forward Logistic Regression,” J. Am. Stat. Assoc., vol. 81, no. 393, pp. 108–113, Mar. 1986.
[138] M. W. Browne, “Cross-Validation Methods,” J. Math. Psychol., vol. 44, no. 1, pp. 108–132, Mar. 2000.
[139] J. Brownlee, “What is the Difference Between Test and Validation Datasets?,” Machine Learning Mastery, 2017.
[140] S. G. Kwak and J. H. Kim, “Central limit theorem: the cornerstone of modern statistics,” Korean J. Anesthesiol., vol. 70, no. 2, p. 144, Apr. 2017.
[141] M. R. Siegle, C. L. K. Robinson, and J. Yakimishyn, “The Effect of Region, Body Size, and Sample Size on the Weight-Length Relationships of Small-Bodied Fishes Found in Eelgrass Meadows,” Northwest Sci., vol. 88, no. 2, pp. 140–154, May 2014.
[142] A. M. Verdery, T. Mouw, S. Bauldry, and P. J. Mucha, “Network Structure and Biased Variance Estimation in Respondent Driven Sampling,” PLoS One, vol. 10, no. 12, p. e0145296, Dec. 2015.
[143] P. Sulewski, “On Differently Defined Skewness,” Comput. Methods Sci. Technol., vol. 14, no. 1, pp. 39–46, 2008.
[144] H. G.-M. Kim, D. Richardson, D. Loomis, M. Van Tongeren, and I. Burstyn, “Bias in the estimation of exposure effects with individual- or group-based exposure assessment,” J. Expo. Sci. Environ. Epidemiol., vol. 21, no. 2, pp. 212–221, Mar. 2011.
[145] X. Wu et al., “Top 10 algorithms in data mining,” Knowl. Inf. Syst., vol. 14, no. 1, pp. 1–37, Jan. 2008.
[146] E. DePoy, L. N. Gitlin, E. DePoy, and L. N. Gitlin, “Statistical Analysis for Experimental-Type Designs,” Introd. to Res., pp. 282–310, Jan. 2016.
[147] D. H. Wolpert, “Ubiquity symposium: Evolutionary computation and the processes of life: what the no free lunch theorems really mean: how to improve search algorithms,” Ubiquity, vol. 2013, no. December, pp. 1–15, Dec. 2013.
[148] D. H. Wolpert, “The Lack of A Priori Distinctions Between Learning Algorithms,” Neural Comput., vol. 8, no. 7, pp. 1341–1390, Oct. 1996.
[149] E. R. DeLong, D. M. DeLong, and D. L. Clarke-Pearson, “Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach,” Biometrics, vol. 44, no. 3, pp. 837–45, Sep. 1988.
[150] A. Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury, “Hypothesis testing, type I and type II errors,” Ind. Psychiatry J., vol. 18, no. 2, pp. 127–31, Jul. 2009.
[151] A. Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury, “Hypothesis testing, type I and type II errors.,” Ind. Psychiatry J., vol. 18, no. 2, pp. 127–31, Jul. 2009.
[152] A. Kirby, V. Gebski, and A. C. Keech, “Determining the sample size in a clinical trial.,” Med. J. Aust., vol. 177, no. 5, pp. 256–7, Sep. 2002.
[153] M. Noordzij, G. Tripepi, F. W. Dekker, C. Zoccali, M. W. Tanck, and K. J. Jager, “Sample size calculations: basic principles and common pitfalls,” Nephrol. Dial. Transplant., vol. 25, no. 5, pp. 1388–1393, May 2010.
[154] X. Guo, Y. Yin, C. Dong, G. Yang, and G. Zhou, “On the Class Imbalance Problem,” in 2008 Fourth International Conference on Natural Computation, 2008, pp. 192–201.
[155] S. Visa and A. Ralescu, “Issues in mining imbalanced data sets - a review paper,” in Proc. 16th Midwest Artificial Intelligence and Cognitive Science Conference, 2005, pp. 67–73.